203 research outputs found
The challenge of complexity for cognitive systems
Complex cognition addresses research on (a) high-level cognitive processes – mainly problem solving, reasoning, and decision making – and their interaction with more basic processes such as perception, learning, motivation, and emotion, and (b) cognitive processes that take place in a complex, typically dynamic, environment. Our focus is on AI systems and cognitive models dealing with complexity and on psychological findings that can inspire or challenge cognitive systems research. In this overview we first motivate why we have to go beyond models of rather simple cognitive processes and reductionist experiments. We then characterize complexity from our perspective. We introduce the triad of cognitive science methods – analytical, empirical, and engineering methods – all of which, in our opinion, must be utilized to tackle complex cognition. We then highlight three aspects of complex cognition: complex problem solving, dynamic decision making, and the learning of concepts, skills, and strategies. We conclude with some reflections on, and challenges for, future research.
Learning to Defend by Attacking (and Vice-Versa): Transfer of Learning in Cybersecurity Games
Designing cyber defense systems to account for cognitive biases in human
decision making has demonstrated significant success in improving performance
against human attackers. However, much of the attention in this area has
focused on relatively simple accounts of biases in human attackers, and little
is known about adversarial behavior or how defenses could be improved by
disrupting attackers' behavior. In this work, we present a novel model of human
decision-making inspired by the cognitive mechanisms of Instance-Based Learning
Theory, Theory of Mind, and Transfer of Learning. This model functions by
learning from both roles in a security scenario (defender and attacker) and by
making predictions of the opponent's beliefs, intentions, and actions. The
proposed model can better defend against attacks from a wide range of opponents
compared to alternatives that attempt to perform optimally without accounting
for human biases. Additionally, the proposed model performs better against a
range of human-like behavior by explicitly modeling human transfer of learning,
which has not yet been applied to cyber defense scenarios. Results from
simulation experiments demonstrate the potential usefulness of cognitively
inspired models of agents trained in attack and defense roles and how these
insights could potentially be used in real-world cybersecurity.
The Role of Inertia in Modeling Decisions from Experience with Instance-Based Learning
One form of inertia is the tendency to repeat the last decision, irrespective of the obtained outcomes, while making decisions from experience (DFE). A number of computational models based upon Instance-Based Learning Theory (IBLT), a theory of DFE, have included different implementations of inertia and have been shown to simultaneously account for both risk-taking and alternations between alternatives. The role that inertia plays in these models, however, is unclear, as the same model without inertia is also able to account for observed risk-taking quite well. This paper demonstrates the predictive benefits of incorporating one particular implementation of inertia into an existing IBL model. We use two large datasets, estimation and competition, from the Technion Prediction Tournament involving a repeated binary-choice task to show that incorporating an inertia mechanism into an IBL model enables it to account for the observed average risk-taking and alternations. Including inertia, however, does not help the model account for the trends in risk-taking and alternations over trials compared to the IBL model without the inertia mechanism. We generalize the two IBL models, with and without inertia, to the competition set by using the parameters determined in the estimation set. The generalization process demonstrates both the advantages and disadvantages of including inertia in an IBL model.
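As a minimal sketch of the kind of inertia mechanism this abstract describes (the paper's exact implementation may differ), inertia can be layered on top of an experience-based choice rule: with some probability the agent simply repeats its last choice; otherwise it picks the option with the highest blended value. The function names and the inertia probability `p_inertia` are illustrative assumptions, not the published model.

```python
import random

def choose(options, blended_values, last_choice, p_inertia=0.3, rng=random):
    """Pick an option: repeat the last choice with probability p_inertia
    (inertia); otherwise choose the option with the highest blended value."""
    if last_choice is not None and rng.random() < p_inertia:
        return last_choice
    return max(options, key=lambda o: blended_values[o])

# Example: "safe" has the higher blended value, but inertia makes the
# agent sometimes repeat its previous "risky" choice anyway.
random.seed(1)
vals = {"safe": 0.6, "risky": 0.4}
picks = [choose(["safe", "risky"], vals, last_choice="risky")
         for _ in range(1000)]
```

Because alternation rate depends directly on `p_inertia`, a mechanism like this lets one model fit average risk-taking and alternations at the same time, which is the trade-off the abstract examines.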
Learning About Simulated Adversaries from Human Defenders using Interactive Cyber-Defense Games
Given the increase in cybercrime, cybersecurity analysts (i.e. Defenders) are
in high demand. Defenders must monitor an organization's network to evaluate
threats and potential breaches into the network. Adversary simulation is
commonly used to test defenders' performance against known threats to
organizations. However, it is unclear how effective this training process is in
preparing defenders for this highly demanding job. In this paper, we
demonstrate how to use adversarial algorithms to investigate defenders'
learning of defense strategies, using interactive cyber defense games. Our
Interactive Defense Game (IDG) represents a cyber defense scenario that
requires constant monitoring of incoming network alerts and allows a defender
to analyze, remove, and restore services based on the events observed in a
network. The participants in our study faced one of two types of simulated
adversaries. A Beeline adversary is a fast, targeted, and informed attacker;
and a Meander adversary is a slow attacker that wanders the network until it
finds the right target to exploit. Our results suggest that although human
defenders initially had more difficulty stopping the Beeline adversary, they
were able to learn to stop this adversary by taking advantage of its attack
strategy. Participants who played against the Beeline adversary learned to
anticipate the adversary and take more proactive actions, while decreasing
their reactive actions. These findings have implications for understanding how
to help cybersecurity analysts speed up their training. Comment: Submitted to Journal of Cybersecurity
Security under Uncertainty: Adaptive Attackers Are More Challenging to Human Defenders than Random Attackers
Game Theory is a common approach used to understand attacker and defender motives, strategies, and allocation of limited security resources. For example, many defense algorithms are based on game-theoretic solutions that conclude that randomization of defense actions assures unpredictability, creating difficulties for a human attacker. However, many game-theoretic solutions rely on idealized assumptions of decision making that underplay the role of human cognition and information uncertainty. The consequence is that we know little about how effective these algorithms are against human players. Using a simplified security game, we study the type of attack strategy and the uncertainty about an attacker's strategy in a laboratory experiment in which participants play the role of defenders against a simulated attacker. Our goal is to compare a human defender's behavior across three levels of uncertainty (Information Level: Certain, Risky, Uncertain) and three types of attacker strategy (Attacker's Strategy: Minimax, Random, Adaptive) in a between-subjects experimental design. The best defense performance is achieved when defenders play against a minimax or a random attack strategy, compared to an adaptive strategy. Furthermore, when payoffs are certain, defenders are as efficient against a random attack strategy as against an adaptive one, but when payoffs are uncertain, defenders have the most difficulty defending against an adaptive attacker compared to a random attacker. We conclude that, given the conditions of uncertainty in many security problems, defense algorithms would be more efficient if they adapted to the attacker's actions, taking advantage of the attacker's human inefficiencies.
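To make the minimax strategy mentioned above concrete: in a two-action zero-sum security game without a saddle point, the defender's minimax (equalizing) mixed strategy has a closed form. The payoff matrix below is a made-up toy example, not from the study; it just illustrates the randomization such algorithms prescribe.

```python
def minimax_mix_2x2(a11, a12, a21, a22):
    """Equalizing mixed strategy for the row player (defender) in a 2x2
    zero-sum game with no saddle point: the probability p of playing row 1
    is chosen so the expected payoff is the same against either column."""
    denom = (a11 - a12) - (a21 - a22)
    return (a22 - a21) / denom

# Hypothetical payoffs: rows = defend server A/B, columns = attack A/B.
p = minimax_mix_2x2(a11=4, a12=-1, a21=-2, a22=3)
v1 = p * 4 + (1 - p) * (-2)   # expected payoff if the attacker targets A
v2 = p * (-1) + (1 - p) * 3   # expected payoff if the attacker targets B
# v1 == v2: the attacker gains nothing by favoring either target.
```

The equalizing property is exactly what makes minimax "unpredictable" against an idealized opponent; the abstract's point is that an adaptive attacker exploiting human defenders is not bound by this equilibrium logic.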
Design of Dynamic and Personalized Deception: A Research Framework and New Insights
Deceptive defense techniques (e.g., intrusion detection, firewalls, honeypots, honeynets) are commonly used to prevent cyberattacks. However, most current defense techniques are generic and static, and are often learned and exploited by attackers. It is important to advance from static to dynamic forms of defense that can actively adapt a defense strategy according to the actions taken by individual attackers during an active attack. Our novel research approach relies on cognitive models and experimental games: cognitive models aim at replicating an attacker's behavior, allowing the creation of personalized, dynamic deceptive defense strategies; experimental games help study human actions, calibrate cognitive models, and validate deceptive strategies. In this paper we offer the following contributions: (i) a general research framework for the design of dynamic, adaptive, and personalized deception strategies for cyberdefense; (ii) a summary of major insights from experiments and cognitive models developed for security games of increased complexity; and (iii) a taxonomy of potential deception strategies derived from our research program so far.
Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models
Developing effective Multi-Agent Systems (MAS) is critical for many
applications requiring collaboration and coordination with humans. Despite the
rapid advance of Multi-Agent Deep Reinforcement Learning (MADRL) in cooperative
MAS, one major challenge is the simultaneous learning and interaction of
independent agents in dynamic environments in the presence of stochastic
rewards. State-of-the-art MADRL models struggle to perform well in Coordinated
Multi-agent Object Transportation Problems (CMOTPs), wherein agents must
coordinate with each other and learn from stochastic rewards. In contrast,
humans often learn rapidly to adapt to nonstationary environments that require
coordination among people. In this paper, motivated by the demonstrated ability
of cognitive models based on Instance-Based Learning Theory (IBLT) to capture
human decisions in many dynamic decision making tasks, we propose three
variants of Multi-Agent IBL models (MAIBL). The idea of these MAIBL algorithms
is to combine the cognitive mechanisms of IBLT and the techniques of MADRL
models to deal with coordination MAS in stochastic environments from the
perspective of independent learners. We demonstrate that the MAIBL models
exhibit faster learning and achieve better coordination in a dynamic CMOTP task
with various settings of stochastic rewards compared to current MADRL models.
We discuss the benefits of integrating cognitive insights into MADRL models. Comment: 22 pages, 5 figures, 2 tables
A Cyber-War Between Bots: Human-Like Attackers are More Challenging for Defenders than Deterministic Attackers
Adversary emulation is commonly used to test cyber defense performance against known threats to organizations. However, designing attack strategies is an expensive and unreliable manual process, based on subjective evaluation of the state of a network. In this paper, we propose the design of adversarial human-like cognitive models that are dynamic, adaptable, and able to learn from experience. A cognitive model is built according to the theoretical principles of Instance-Based Learning Theory (IBLT) of experiential choice in dynamic tasks. In a simulation experiment, we compared the predictions of an IBL attacker with those of a carefully designed, efficient but deterministic attacker attempting to access an operational server in a network. The results suggest that an IBL cognitive model that emulates human behavior can be a more challenging adversary for defenders than carefully crafted optimal attack strategies. These insights can be used to inform future adversary emulation efforts and cyber defender training.
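IBLT, which several of these abstracts build on, stores each experienced outcome as an instance and retrieves it via an ACT-R-style activation that decays with time; options are compared by blended values. A minimal sketch under simplifying assumptions (activation noise and other published mechanisms omitted; decay `d` and temperature `tau` are illustrative parameter names):

```python
import math

def activation(use_times, now, d=0.5):
    """Base-level activation of an instance: log of summed power-law
    decay over its past uses (ACT-R style); noise omitted in this sketch."""
    return math.log(sum((now - t) ** (-d) for t in use_times))

def blended_value(instances, now, tau=0.25):
    """Blend instance outcomes, weighted by Boltzmann (softmax) retrieval
    probabilities derived from each instance's activation."""
    acts = [activation(times, now) for _, times in instances]
    weights = [math.exp(a / tau) for a in acts]
    total = sum(weights)
    return sum((w / total) * outcome
               for (outcome, _), w in zip(instances, weights))

# Two instances for one option: a recent payoff of 10 and an older payoff
# of 0. The recent instance dominates, so the blended value is near 10.
v = blended_value([(10, [9]), (0, [1])], now=10)
```

This recency-weighted blending is what makes an IBL attacker adapt to what recently worked rather than follow a fixed plan, which is why it can be harder to defend against than a deterministic strategy.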
- …